Respondent Mode Choice in a Smartphone Survey, United States, 2012 (ICPSR 37836)

Version Date: Oct 8, 2020

Principal Investigator(s):
Frederick G. Conrad, University of Michigan. Institute for Social Research. Survey Research Center; Michael F. Schober, The New School for Social Research (New York, N.Y.: 2005- ). Department of Psychology

https://doi.org/10.3886/ICPSR37836.v1

Version V1


Now that people on mobile devices can easily choose their mode of communication (e.g., voice, text, or video), survey designers can allow respondents to answer questions in whatever mode they find momentarily convenient given their circumstances, or that they chronically prefer. Investigators conducted an experiment to explore how mode choice affects response quality, participation, and satisfaction in smartphone interviews.

Respondents were interviewed on their iPhone in one of four modes: Human Voice, Human Text, Automated Voice, and Automated Text. Respondents were either assigned the mode of their interview (Assigned Mode), in which case the contact and interviewing modes were the same, or they were required to choose the mode of their interview (Mode Choice) after being contacted in one of the four modes. In total, 634 respondents completed the interview and a post-interview online debriefing questionnaire in the Assigned Mode group, and 626 respondents completed the interview and online debriefing in the Mode Choice group.

This dataset contains 2,691 cases: the 1,260 respondents who completed the interview and debriefing, as well as 1,431 cases representing people who were invited to participate but ended their participation short of the last debriefing question (they did not choose a mode, did not answer the first question, started but did not finish the interview, or finished the interview but did not complete the debriefing). All respondents who completed the interview answered 32 questions drawn from US social surveys. Thirteen interviewers from the University of Michigan Survey Research Center administered voice and text interviews (five administered interviews in both experimental conditions, three conducted only Assigned Mode interviews, and five conducted interviews in just the Mode Choice condition). Automated systems launched parallel text and voice interviews at the same time as the human interviews.
Respondents who chose their interview modes provided more conscientious (fewer rounded and non-differentiated) answers, and they reported greater satisfaction with the interview. Although fewer respondents started the interview when given a choice of mode, a higher percentage of Mode Choice respondents who started the interview completed it. For certain mode transitions (e.g., from automated interview modes) there was no reduction in participation. The results demonstrate clear benefits and relatively few drawbacks resulting from mode choice, at least among these modes and with this sample of iPhone users, suggesting that further exploration of mode choice and the logistics of its implementation is warranted. Demographic variables include participants' gender, race, education level, and household income.

Conrad, Frederick G., and Schober, Michael F. Respondent Mode Choice in a Smartphone Survey, United States, 2012. Inter-university Consortium for Political and Social Research [distributor], 2020-10-08. https://doi.org/10.3886/ICPSR37836.v1

National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1025645, SES-1026225)

Country: United States

Distributor: Inter-university Consortium for Political and Social Research

Time Period: 2012-03-01 -- 2012-05-31, 2012-07-01 -- 2012-09-30
Date of Collection: 2012-03-01 -- 2012-05-31, 2012-07-01 -- 2012-09-30
  1. This study was originally released in OpenICPSR.
  2. This study is related to ICPSR 37837 and ICPSR 37846.

iPhone users were recruited from Craigslist, Facebook, Google Ads, and Amazon Mechanical Turk. They completed a screening questionnaire to determine whether they were eligible to participate: participants had to be 21 or older and own an iPhone with a US area code. The recruited participants were not intended to represent the US population, iPhone users, or smartphone users; the sample was designed to test experimental manipulations, through random assignment to conditions, on a consistent platform. Eligible participants who provided a telephone number in the screening questionnaire were sent a text message with a link to a web page, which captured the device's user-agent string to verify that the device was an iPhone. Phone numbers of eligible participants were then assigned a contact mode. Respondents in the Mode Choice group could choose to be interviewed in the contact mode or in one of the other three modes. Once the interview was completed, respondents were sent a link via text message to a post-interview debriefing questionnaire about their experience. At the conclusion of the debriefing, respondents were sent a text message with a $20 iTunes gift code as a token of appreciation for their time.
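The study's actual server-side check is not documented, but capturing the user-agent string and matching it against the conventional iPhone token is the standard heuristic for this kind of device screening. A minimal sketch in Python (the function name and the sample user-agent strings are illustrative, not from the study):

```python
import re


def is_iphone(user_agent: str) -> bool:
    """Return True if the user-agent string identifies an iPhone.

    Illustrative heuristic only: Safari on iPhone conventionally
    includes the token "iPhone" in its user-agent string.
    """
    return re.search(r"\biPhone\b", user_agent) is not None


# Example user-agent strings (hypothetical, for illustration):
iphone_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) "
             "AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 "
             "Mobile/9B179 Safari/7534.48.3")
desktop_ua = "Mozilla/5.0 (Windows NT 6.1; Win64; x64)"
```

A check like this is only a screen, not a guarantee: user-agent strings can be spoofed, which is one reason screening also asked participants to confirm iPhone ownership directly.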

Cross-sectional

iPhone users 21 years of age and older

Individual

The response rate below was calculated using American Association for Public Opinion Research Response Rate 2 (AAPOR RR2).

46.4% (654/1,409) - Mode Choice

50.5% (648/1,282) - Assigned Mode
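AAPOR RR2 counts complete plus partial interviews in the numerator, over all eligible cases plus cases of unknown eligibility. A minimal sketch of the calculation in Python; only the totals (654/1,409 and 648/1,282) are reported for this study, so the breakdown of the non-respondent components below is an illustrative placeholder, not study data:

```python
def aapor_rr2(completes: int, partials: int, refusals: int,
              non_contacts: int, other: int, unknown: int) -> float:
    """AAPOR Response Rate 2.

    RR2 = (I + P) / ((I + P) + (R + NC + O) + (UH + UO))
    where I = completes, P = partials, R = refusals/break-offs,
    NC = non-contacts, O = other eligible non-interviews, and
    UH + UO = cases of unknown eligibility.
    """
    numerator = completes + partials
    denominator = numerator + refusals + non_contacts + other + unknown
    return numerator / denominator


# Mode Choice group: 654 respondents out of 1,409 invited; the split of
# the 755 non-respondents across components is hypothetical.
rate = aapor_rr2(completes=654, partials=0, refusals=500,
                 non_contacts=200, other=55, unknown=0)
```

With the reported totals, 654/1,409 rounds to 46.4% and 648/1,282 to 50.5%, matching the rates above.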


Original Release Date: 2020-10-08

2020-10-08 ICPSR data undergo a confidentiality review and are altered when necessary to limit the risk of disclosure. ICPSR also routinely creates ready-to-go data files along with setups in the major statistical software formats as well as standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:

  • Created variable labels and/or value labels.
  • Created online analysis version with question text.
  • Checked for undocumented or out-of-range codes.
